Object Level Deep Feature Pooling for Compact Image Representation
Convolutional Neural Network (CNN) features have been successfully employed
in recent works as an image descriptor for various vision tasks. But the
inability of the deep CNN features to exhibit invariance to geometric
transformations and object compositions poses a great challenge for image
search. In this work, we demonstrate the effectiveness of the objectness prior
over the deep CNN features of image regions for obtaining an invariant image
representation. The proposed approach represents the image as a vector of
pooled CNN features describing the underlying objects. This representation
provides robustness to spatial layout of the objects in the scene and achieves
invariance to general geometric transformations, such as translation, rotation
and scaling. The proposed approach also leads to a compact representation of
the scene, making each image occupy a smaller memory footprint. Experiments
show that the proposed representation achieves state-of-the-art retrieval
results on a set of challenging benchmark image datasets, while maintaining a
compact representation.
Comment: Deep Vision 201
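The pooling idea above can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: it assumes each detected object region has already been described by a CNN feature vector, and uses elementwise max-pooling (one plausible choice) followed by L2 normalisation. The function name and toy data are hypothetical.

```python
import numpy as np

def object_pooled_descriptor(region_features):
    """Pool per-region CNN features into one compact image descriptor.

    region_features: (num_regions, feat_dim) array, one feature vector per
    object proposal (assumed to come from a CNN; not reproduced here).
    """
    pooled = region_features.max(axis=0)              # elementwise max over regions
    return pooled / (np.linalg.norm(pooled) + 1e-12)  # L2-normalise for retrieval

# Toy check: permuting the regions (a change of spatial layout)
# leaves the pooled descriptor unchanged.
rng = np.random.RandomState(0)
feats = rng.rand(5, 8)                 # 5 hypothetical regions, 8-dim features
d1 = object_pooled_descriptor(feats)
d2 = object_pooled_descriptor(feats[::-1])
```

Because the max is taken over the region axis, any reordering of regions yields the same descriptor, which is the layout-invariance property the abstract describes.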
NAG: Network for Adversary Generation
Adversarial perturbations can pose a serious threat for deploying machine
learning systems. Recent works have shown existence of image-agnostic
perturbations that can fool classifiers over most natural images. Existing
methods present optimization approaches that solve for a fooling objective with
an imperceptibility constraint to craft the perturbations. However, for a given
classifier, they generate one perturbation at a time, which is a single
instance from the manifold of adversarial perturbations. Also, in order to
build robust models, it is essential to explore the manifold of adversarial
perturbations. In this paper, we propose, for the first time, a generative
approach to model the distribution of adversarial perturbations. The
architecture of the proposed model is inspired by that of GANs and is trained
using fooling and diversity objectives. Our trained generator network attempts
to capture the distribution of adversarial perturbations for a given classifier
and readily generates a wide variety of such perturbations. Our experimental
evaluation demonstrates that perturbations crafted by our model (i) achieve
state-of-the-art fooling rates, (ii) exhibit wide variety and (iii) deliver
excellent cross-model generalizability. Our work can be deemed an important
step toward understanding the complex manifolds of adversarial
perturbations.
Comment: CVPR 201
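The two training objectives mentioned above can be sketched as simple loss functions. This is a hedged illustration, not the paper's formulation: the exact fooling and diversity terms are assumed here as (i) the log-probability that a perturbed input keeps its clean label, and (ii) the negative mean pairwise distance between generated perturbations. All names are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fooling_loss(clean_logits, adv_logits):
    """Mean log-probability that perturbed inputs keep their clean labels.

    Minimising this pushes predictions away from the clean label
    (a simplified stand-in for the paper's fooling objective).
    """
    labels = clean_logits.argmax(axis=-1)
    p = softmax(adv_logits)[np.arange(len(labels)), labels]
    return np.log(p + 1e-12).mean()

def diversity_loss(perturbations):
    """Negative mean pairwise distance between generated perturbations.

    Minimising this encourages the generator to produce varied
    perturbations (an assumed form of the diversity term).
    """
    P = perturbations.reshape(len(perturbations), -1)
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    n = len(P)
    return -d.sum() / (n * (n - 1))
```

A generator network would then be trained to minimise a weighted sum of these two losses, so that its samples both fool the target classifier and cover a wide region of the perturbation manifold.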
Dataset Condensation with Gradient Matching
As the state-of-the-art machine learning methods in many fields rely on
larger datasets, storing datasets and training models on them become
significantly more expensive. This paper proposes a training set synthesis
technique for data-efficient learning, called Dataset Condensation, that learns
to condense large dataset into a small set of informative synthetic samples for
training deep neural networks from scratch. We formulate this goal as a
gradient matching problem between the gradients of deep neural network weights
that are trained on the original and our synthetic data. We rigorously evaluate
its performance in several computer vision benchmarks and demonstrate that it
significantly outperforms the state-of-the-art methods. Finally, we explore the
use of our method in continual learning and neural architecture search, and
report promising gains when limited memory and computation are available.
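The gradient-matching formulation above can be illustrated with a toy version. This sketch makes strong simplifying assumptions: a fixed linear model with squared loss and a single sampled weight vector, whereas the paper matches gradients of deep networks across many weight initialisations. All names and sizes are hypothetical.

```python
import numpy as np

def model_grad(w, X, y):
    """Gradient of the mean-squared loss of a linear model w on (X, y)."""
    return X.T @ (X @ w - y) / len(y)

def condense(X_real, y_real, n_syn=2, steps=500, lr=0.01, seed=0):
    """Toy gradient-matching condensation with a fixed linear model."""
    rng = np.random.RandomState(seed)
    w = rng.randn(X_real.shape[1])            # one sampled weight vector
    g_real = model_grad(w, X_real, y_real)    # real-data gradient to match
    X_syn = rng.randn(n_syn, X_real.shape[1])
    y_syn = y_real[:n_syn].astype(float).copy()
    losses = []
    for _ in range(steps):
        r = X_syn @ w - y_syn                 # residuals on synthetic set
        diff = X_syn.T @ r / n_syn - g_real   # g_syn - g_real
        losses.append(float(diff @ diff))
        # Analytic gradient of ||g_syn - g_real||^2 with respect to X_syn.
        grad_X = (2.0 / n_syn) * (np.outer(r, diff) + np.outer(X_syn @ diff, w))
        X_syn -= lr * grad_X
    return X_syn, y_syn, losses

# Toy run: 20 real samples condensed into 2 synthetic ones.
rng = np.random.RandomState(1)
X_real, y_real = rng.randn(20, 3), rng.randn(20)
X_syn, y_syn, losses = condense(X_real, y_real)
```

After training, the synthetic set induces nearly the same model gradient as the full dataset at the sampled weights, which is the core property the condensation objective optimises.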